Wrapped Cauchy distribution (wrapcauchy)#
The wrapped Cauchy is a circular distribution: it models angles or phases by taking a Cauchy random variable on the real line and wrapping it modulo \(2\pi\).
Compared to the von Mises (circular normal), wrapcauchy is a heavy-tailed way to model directional noise: most mass concentrates near a preferred direction, but occasional large angular deviations remain plausible.
Learning goals#
- Understand `wrapcauchy` as a wrapped (mod-\(2\pi\)) version of a Cauchy distribution.
- Know the PDF and a usable CDF/quantile expression on \([0,2\pi)\).
- Use trigonometric moments to compute circular mean/variance.
- Implement sampling from scratch with NumPy and validate against SciPy.
- See how to use `scipy.stats.wrapcauchy` for evaluation, simulation, and fitting.
import platform
import numpy as np
import plotly.express as px
import plotly.graph_objects as go
import os
import plotly.io as pio
from plotly.subplots import make_subplots
import scipy
from scipy import optimize
from scipy.stats import chi2, wrapcauchy
pio.templates.default = "plotly_white"
pio.renderers.default = os.environ.get("PLOTLY_RENDERER", "notebook")
np.set_printoptions(precision=4, suppress=True)
rng = np.random.default_rng(7)
print("Python", platform.python_version())
print("NumPy", np.__version__)
print("SciPy", scipy.__version__)
Python 3.12.9
NumPy 1.26.2
SciPy 1.15.0
1) Title & classification#
- Name: `wrapcauchy`
- Type: continuous distribution (on the circle)
- Support: an angle \(\theta\) represented on \([0,2\pi)\) (equivalently, points on the unit circle)
- Parameter space: concentration \(c\in(0,1)\) and mean direction \(\mu\in[0,2\pi)\)
We write (one common parameterization):

$$
f(\theta;\mu,c)=\frac{1-c^2}{2\pi\bigl(1+c^2-2c\cos(\theta-\mu)\bigr)},\qquad \theta\in[0,2\pi).
$$
Notes:
The random variable is modulo \(2\pi\): \(\theta\) and \(\theta + 2\pi k\) represent the same direction.
SciPy exposes `wrapcauchy(c, loc=0, scale=1)` as a distribution on an interval of length \(2\pi\,\text{scale}\). For circular work you typically wrap angles yourself.
2) Intuition & motivation#
What it models#
A noisy direction, heading, or phase with a preferred direction \(\mu\) but occasional large deviations.
Typical real-world use cases#
- Wind direction deviations around a prevailing direction.
- Robot heading / compass errors when occasional outliers happen (magnetic interference).
- Phase noise in oscillatory signals (circular variables).
- Bearing errors in navigation and tracking.
Relations to other distributions#
- Cauchy: if \(Y\sim\mathrm{Cauchy}(\mu,\gamma)\) on \(\mathbb{R}\) and \(\Theta = Y\bmod 2\pi\), then \(\Theta\) is wrapped Cauchy.
- Uniform on the circle: as \(c\to 0\), the density approaches \(1/(2\pi)\).
- von Mises: often called the “circular normal”; `wrapcauchy` is a heavier-tailed alternative.
- Poisson kernel / harmonic measure: the wrapped Cauchy density is the Poisson kernel on the unit disk; it appears as the exit-angle distribution of planar Brownian motion started at radius \(c\).
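The Poisson-kernel identity is easy to check numerically. A small sketch (assuming only the density formula) comparing `scipy.stats.wrapcauchy.pdf` with the Poisson kernel \(P_c(\varphi)=(1-c^2)/(1-2c\cos\varphi+c^2)\) normalized by \(2\pi\):

```python
import numpy as np
from scipy.stats import wrapcauchy

c = 0.6
phi = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)

# Poisson kernel at radius c on the unit circle
poisson_kernel = (1 - c**2) / (1 - 2 * c * np.cos(phi) + c**2)

# The wrapped Cauchy density (mu = 0) should equal the kernel / 2π
diff = np.max(np.abs(wrapcauchy.pdf(phi, c) - poisson_kernel / (2 * np.pi)))
print("max |pdf - kernel/2π|:", diff)
```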
3) Formal definition#
Let \(\theta\in[0,2\pi)\) denote an angle.
PDF#
A common parameterization is:

$$
f(\theta;\mu,c)=\frac{1-c^2}{2\pi\bigl(1+c^2-2c\cos(\theta-\mu)\bigr)},\qquad \theta\in[0,2\pi).
$$

This is the Poisson kernel on the unit circle.
CDF#
On the interval \([0,2\pi)\) (with a cut at \(0\)), define \(\delta = (\theta-\mu)\bmod 2\pi \in [0,2\pi)\) and

$$
A=\frac{1+c}{1-c}.
$$

Then a convenient closed form is

$$
F(\delta)=
\begin{cases}
\dfrac{1}{\pi}\arctan\!\bigl(A\tan(\delta/2)\bigr), & 0\le\delta\le\pi,\\[1ex]
1+\dfrac{1}{\pi}\arctan\!\bigl(A\tan(\delta/2)\bigr), & \pi<\delta<2\pi.
\end{cases}
$$

The split is needed because \(\tan(\delta/2)\) changes sign at \(\delta=\pi\).
Quantile function (inverse CDF)#
For \(p\in(0,1)\), define \(B=(1-c)/(1+c)\). A usable inverse on \([0,2\pi)\) is

$$
Q(p)=\Bigl(\mu+2\arctan\!\bigl(B\tan(\pi p)\bigr)\Bigr)\bmod 2\pi .
$$
Because the variable is circular, CDF/quantile formulas depend on the chosen \([0,2\pi)\) representation.
TAU = 2 * np.pi
def wrap_angle(theta: np.ndarray | float) -> np.ndarray:
'''Map angles to [0, 2π).'''
return np.mod(theta, TAU)
def wrapcauchy_pdf(theta: np.ndarray | float, c: float, mu: float = 0.0) -> np.ndarray:
theta = np.asarray(theta, dtype=float)
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
delta = wrap_angle(theta - mu)
denom = 1.0 + c * c - 2.0 * c * np.cos(delta)
return (1.0 - c * c) / (TAU * denom)
def wrapcauchy_logpdf(theta: np.ndarray | float, c: float, mu: float = 0.0) -> np.ndarray:
theta = np.asarray(theta, dtype=float)
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
delta = wrap_angle(theta - mu)
denom = 1.0 + c * c - 2.0 * c * np.cos(delta)
return np.log1p(-c * c) - np.log(TAU) - np.log(denom)
def _wrapcauchy_cdf_delta(delta: np.ndarray, c: float) -> np.ndarray:
'''CDF for delta in [0, 2π).'''
a = (1.0 + c) / (1.0 - c)
base = np.arctan(a * np.tan(delta / 2.0)) / np.pi
return np.where(delta <= np.pi, base, 1.0 + base)
def wrapcauchy_cdf(theta: np.ndarray | float, c: float, mu: float = 0.0) -> np.ndarray:
theta = np.asarray(theta, dtype=float)
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
delta = wrap_angle(theta - mu)
return _wrapcauchy_cdf_delta(delta, c)
def wrapcauchy_ppf(p: np.ndarray | float, c: float, mu: float = 0.0) -> np.ndarray:
p = np.asarray(p, dtype=float)
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
if np.any((p <= 0.0) | (p >= 1.0)):
raise ValueError("p must be in (0, 1)")
b = (1.0 - c) / (1.0 + c)
# Handle p very close to 0.5 to avoid tan(pi p) overflow in finite precision.
delta = 2.0 * np.arctan(b * np.tan(np.pi * p))
delta = np.where(np.isclose(p, 0.5), np.pi, delta)
return wrap_angle(mu + delta)
def sample_wrapcauchy_numpy(
n: int,
c: float,
mu: float = 0.0,
rng: np.random.Generator | None = None,
) -> np.ndarray:
'''Sample Θ ~ WrappedCauchy(μ, c) using NumPy only.
Uses the representation Θ = (μ + Y) mod 2π with Y ~ Cauchy(0, γ) and γ = -log c.
'''
if rng is None:
rng = np.random.default_rng()
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
gamma = -np.log(c)
u = rng.random(n)
eps = np.finfo(float).eps
u = np.clip(u, eps, 1.0 - eps)
y = gamma * np.tan(np.pi * (u - 0.5))
return wrap_angle(mu + y)
# Quick cross-check against SciPy for μ = 0 on [0, 2π)
c = 0.7
x = np.linspace(0.0, TAU, 11, endpoint=False)
print("max |pdf - scipy|:", np.max(np.abs(wrapcauchy_pdf(x, c) - wrapcauchy.pdf(x, c))))
print("max |cdf - scipy|:", np.max(np.abs(wrapcauchy_cdf(x, c) - wrapcauchy.cdf(x, c))))
p = np.linspace(0.05, 0.95, 10)
print("max |ppf - scipy|:", np.max(np.abs(wrapcauchy_ppf(p, c) - wrapcauchy.ppf(p, c))))
max |pdf - scipy|: 0.0
max |cdf - scipy|: 0.0
max |ppf - scipy|: 1.7763568394002505e-15
4) Moments & properties#
Because `wrapcauchy` lives on a circle, the most natural moments are trigonometric moments.
Mean, variance, skewness, kurtosis#
The usual (linear) moments of \(\Theta\in[0,2\pi)\) all exist because the support is bounded.
However, linear mean/variance can be misleading for angles (they depend on the cut at \(0\)).
For circular data we instead use moments of \(e^{i\Theta}\).
Define the trigonometric moments for integer \(k\):

$$
m_k=\mathbb{E}\bigl[e^{ik\Theta}\bigr].
$$

For the wrapped Cauchy,

$$
m_k=c^{|k|}e^{ik\mu}.
$$

In particular,

- \(\mathbb{E}[e^{i\Theta}] = c e^{i\mu}\)
- the mean direction is \(\mu\) (argument of \(m_1\))
- the mean resultant length is \(R = |m_1| = c\) (a concentration measure)

A common circular variance is

$$
V=1-R=1-c.
$$
Because the density is symmetric about \(\mu\), the distribution has zero circular skewness under standard definitions; peakedness is reflected by \(m_2=c^2\) and related circular kurtosis measures.
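A quick Monte Carlo sketch of these two facts, built directly on the wrap-a-Cauchy construction so it is self-contained: \(\mathbb{E}[\sin 2(\Theta-\mu)]\) should vanish by symmetry, and \(\mathbb{E}[\cos 2(\Theta-\mu)]\) should approach \(c^2\):

```python
import numpy as np

rng = np.random.default_rng(0)
c, mu, n = 0.8, 0.6, 400_000

# Θ = (μ + Y) mod 2π with Y ~ Cauchy(0, γ), γ = -log c
y = -np.log(c) * np.tan(np.pi * (rng.random(n) - 0.5))
theta = np.mod(mu + y, 2 * np.pi)

s2 = np.mean(np.sin(2 * (theta - mu)))  # symmetry: should be ≈ 0
c2 = np.mean(np.cos(2 * (theta - mu)))  # second trig moment: should be ≈ c² = 0.64
print(s2, c2)
```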
MGF and characteristic function#
Since \(\Theta\in[0,2\pi)\), the MGF \(M(t)=\mathbb{E}[e^{t\Theta}]\) exists for all real \(t\).
Closed forms for general real \(t\) are not commonly used; numerically, you can integrate.
The most useful “characteristic function” on the circle is the Fourier sequence \(m_k\) above.
Entropy#
The differential entropy is

$$
h(\Theta)=-\int_0^{2\pi} f(\theta)\log f(\theta)\,d\theta=\log\bigl(2\pi(1-c^2)\bigr).
$$

As \(c\to 0\), \(f\to 1/(2\pi)\) and \(h\to \log(2\pi)\) (maximum for the circle). As \(c\to 1\), the density becomes very concentrated and the entropy decreases (to \(-\infty\)).
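The closed form \(h=\log\bigl(2\pi(1-c^2)\bigr)\) (a standard wrapped-Cauchy result, stated here as an assumption) can be sanity-checked against a Riemann sum of \(-f\log f\):

```python
import numpy as np

TAU = 2 * np.pi

def entropy_numeric(c: float, m: int = 200_000) -> float:
    # Riemann sum of -f log f over a uniform grid on [0, 2π)
    theta = np.linspace(0.0, TAU, m, endpoint=False)
    f = (1 - c**2) / (TAU * (1 + c**2 - 2 * c * np.cos(theta)))
    return float(-np.sum(f * np.log(f)) * TAU / m)

c = 0.7
h_closed = float(np.log(TAU * (1 - c**2)))
print(h_closed, entropy_numeric(c))  # the two should agree closely
```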
def trig_moment(k: int, c: float, mu: float = 0.0) -> complex:
'''E[e^{i k Θ}] for integer k.'''
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
return (c ** abs(k)) * np.exp(1j * k * mu)
def circular_mean_direction(theta: np.ndarray) -> float:
'''Angle of the sample mean resultant.'''
z = np.mean(np.exp(1j * theta))
return float(wrap_angle(np.angle(z)))
def mean_resultant_length(theta: np.ndarray) -> float:
return float(np.abs(np.mean(np.exp(1j * theta))))
c, mu = 0.8, 0.6
n = 200_000
s = sample_wrapcauchy_numpy(n=n, c=c, mu=mu, rng=rng)
m1_hat = np.mean(np.exp(1j * s))
print("Monte Carlo m1:", m1_hat)
print("Theory m1:", trig_moment(1, c, mu))
print("|m1| (R):", abs(m1_hat), "(theory", c, ")")
print("mean direction:", circular_mean_direction(s), "(theory", mu, ")")
# Compare to SciPy's linear moments (on the [0, 2π) cut)
mean_lin, var_lin, skew_lin, kurt_lin = wrapcauchy.stats(c, moments="mvsk")
print()
print("SciPy stats on [0,2π):")
print("mean:", float(mean_lin))
print("var:", float(var_lin))
print("skew:", float(skew_lin))
print("kurtosis(excess):", float(kurt_lin))
Monte Carlo m1: (0.6616846079018803+0.4525444672960833j)
Theory m1: (0.6602684919277427+0.4517139787160283j)
|m1| (R): 0.8016377082039997 (theory 0.8 )
mean direction: 0.599857582268681 (theory 0.6 )
SciPy stats on [0,2π):
mean: 3.141592653589793
var: 7.5890465337294515
skew: 6.797345980028823e-16
kurtosis(excess): -1.895697594807302
def wrapcauchy_entropy(c: float, grid_size: int = 200_000) -> float:
if not (0.0 < c < 1.0):
raise ValueError("c must be in (0, 1)")
theta = np.linspace(0.0, TAU, grid_size, endpoint=False)
f = wrapcauchy_pdf(theta, c)
dx = TAU / grid_size
return float(-np.sum(f * np.log(f)) * dx)
cs = np.linspace(0.02, 0.98, 25)
hs = np.array([wrapcauchy_entropy(ci) for ci in cs])
fig = go.Figure()
fig.add_trace(go.Scatter(x=cs, y=hs, mode="lines+markers"))
fig.add_hline(y=np.log(TAU), line_dash="dash", annotation_text="log(2π) (uniform)")
fig.update_layout(
title="Differential entropy vs concentration c",
xaxis_title="c",
yaxis_title="h(Θ)",
width=850,
height=380,
)
fig.show()
5) Parameter interpretation#
- \(\mu\) (mean direction / mode): rotates the distribution around the circle.
- \(c\) (concentration): controls how tightly the distribution clusters around \(\mu\).
  - \(c\approx 0\): nearly uniform.
  - \(c\approx 1\): extremely concentrated with a sharp peak.

A useful connection is to the unwrapped Cauchy scale \(\gamma\):
if \(Y\sim\mathrm{Cauchy}(\mu,\gamma)\) and \(\Theta = Y\bmod 2\pi\), then \(\Theta\sim\mathrm{WrappedCauchy}(\mu,c)\) with \(c=e^{-\gamma}\).
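That correspondence can be verified without any wrapped-Cauchy machinery: wrap raw \(\mathrm{Cauchy}(0,\gamma)\) draws and compare the mean resultant length to \(e^{-\gamma}\) (a self-contained sketch with its own generator):

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, n = 0.5, 500_000

# Cauchy(0, γ) via the inverse-CDF tangent transform, then wrap onto [0, 2π)
y = gamma * np.tan(np.pi * (rng.random(n) - 0.5))
theta = np.mod(y, 2 * np.pi)

# The mean resultant length should match c = e^{-γ}
R = float(np.abs(np.mean(np.exp(1j * theta))))
print(R, np.exp(-gamma))
```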
# Shape changes: PDF and CDF for different c (μ = 0)
mu = 0.0
cs = [0.1, 0.4, 0.7, 0.9]
theta = np.linspace(0.0, TAU, 2000, endpoint=False)
fig = make_subplots(rows=1, cols=2, subplot_titles=("PDF on [0, 2π)", "CDF on [0, 2π)"))
for c in cs:
f = wrapcauchy_pdf(theta, c, mu=mu)
F = wrapcauchy_cdf(theta, c, mu=mu)
fig.add_trace(go.Scatter(x=theta, y=f, mode="lines", name=f"c={c}"), row=1, col=1)
fig.add_trace(go.Scatter(x=theta, y=F, mode="lines", name=f"c={c}"), row=1, col=2)
fig.update_xaxes(title_text="θ (radians)", row=1, col=1)
fig.update_xaxes(title_text="θ (radians)", row=1, col=2)
fig.update_yaxes(title_text="f(θ)", row=1, col=1)
fig.update_yaxes(title_text="F(θ)", row=1, col=2)
fig.update_layout(width=1000, height=380)
fig.show()
6) Derivations#
Expectation (trigonometric moments)#
Represent \(\Theta\) by wrapping a Cauchy:
Let \(Y\sim\mathrm{Cauchy}(\mu,\gamma)\) on \(\mathbb{R}\) with characteristic function

$$
\varphi_Y(t)=\mathbb{E}\bigl[e^{itY}\bigr]=e^{it\mu-\gamma|t|},\qquad t\in\mathbb{R}.
$$
Define \(\Theta = Y \bmod 2\pi\) (mapped into \([0,2\pi)\)).
For any integer \(k\),

$$
\mathbb{E}\bigl[e^{ik\Theta}\bigr]=\mathbb{E}\bigl[e^{ikY}\bigr],
$$

because adding \(2\pi\) multiples does not change \(e^{ik\cdot}\) for integer \(k\).
Therefore

$$
\mathbb{E}\bigl[e^{ik\Theta}\bigr]=e^{ik\mu-\gamma|k|}.
$$

Setting \(c=e^{-\gamma}\) gives

$$
m_k=\mathbb{E}\bigl[e^{ik\Theta}\bigr]=c^{|k|}e^{ik\mu}.
$$
Variance#
A common circular variance uses the first trigonometric moment \(m_1\):

$$
V=1-|m_1|=1-c.
$$

(Linear variance on \([0,2\pi)\) exists but is not rotation-invariant and is usually not what you want for angles.)
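The parenthetical claim can be made concrete: rotating the same sample changes the linear variance (the cut moves relative to the mass) while \(1-R\) is untouched. A sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
c, mu, n = 0.6, 3.0, 200_000

# Wrapped Cauchy sample with mean direction μ ≈ π (far from the cut at 0)
y = -np.log(c) * np.tan(np.pi * (rng.random(n) - 0.5))
theta = np.mod(mu + y, 2 * np.pi)
theta_rot = np.mod(theta + np.pi, 2 * np.pi)  # same shape, mass now near the cut

var_lin = (np.var(theta), np.var(theta_rot))  # differ: cut-dependent
V = [1 - np.abs(np.mean(np.exp(1j * t))) for t in (theta, theta_rot)]  # agree: ≈ 1 - c
print(var_lin)
print(V)
```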
Likelihood#
For observations \(\theta_1,\dots,\theta_n\) (treated modulo \(2\pi\)), the log-likelihood is

$$
\ell(c,\mu)=n\log\frac{1-c^2}{2\pi}-\sum_{i=1}^n \log\bigl(1+c^2-2c\cos\delta_i\bigr),
$$

with

$$
\delta_i=(\theta_i-\mu)\bmod 2\pi .
$$

There is no simple closed-form MLE for \((\mu,c)\), but it is a smooth 2D optimization problem.
def wrapcauchy_loglik(theta: np.ndarray, c: float, mu: float) -> float:
return float(np.sum(wrapcauchy_logpdf(theta, c=c, mu=mu)))
def mle_wrapcauchy(theta: np.ndarray) -> tuple[float, float]:
'''Numerical MLE for (c, μ) on the circle.'''
theta = wrap_angle(np.asarray(theta, dtype=float))
# Good initial guess from the sample mean resultant
z = np.mean(np.exp(1j * theta))
mu0 = wrap_angle(np.angle(z))
c0 = np.clip(abs(z), 1e-4, 1.0 - 1e-4)
def nll(params: np.ndarray) -> float:
logit_c, mu = float(params[0]), float(params[1])
c = 1.0 / (1.0 + np.exp(-logit_c))
c = np.clip(c, 1e-12, 1.0 - 1e-12)
mu = wrap_angle(mu)
return -wrapcauchy_loglik(theta, c=c, mu=mu)
x0 = np.array([np.log(c0 / (1.0 - c0)), mu0])
res = optimize.minimize(nll, x0=x0, method="BFGS")
logit_c_hat, mu_hat = res.x
c_hat = 1.0 / (1.0 + np.exp(-logit_c_hat))
mu_hat = wrap_angle(mu_hat)
return float(c_hat), float(mu_hat)
# MLE demo
c_true, mu_true = 0.75, 1.1
n = 2_000
x = sample_wrapcauchy_numpy(n=n, c=c_true, mu=mu_true, rng=rng)
c_hat, mu_hat = mle_wrapcauchy(x)
print("true c, μ:", c_true, mu_true)
print("mle c, μ:", c_hat, mu_hat)
true c, μ: 0.75 1.1
mle c, μ: 0.7619482686444916 1.1108576330168647
7) Sampling & simulation (NumPy-only)#
Algorithm#
Use the “unwrap + wrap” representation:
1. Choose \(c\in(0,1)\) and set \(\gamma=-\log c\).
2. Sample \(Y\sim\mathrm{Cauchy}(0,\gamma)\) on \(\mathbb{R}\).
3. Return \(\Theta = (\mu + Y)\bmod 2\pi\).
Why this works:
- Wrapping mod \(2\pi\) creates a circular variable.
- For integer \(k\), \(e^{ik\Theta}=e^{ikY}\), so the trigonometric moments match those of a Cauchy.
- The Cauchy characteristic function gives \(\mathbb{E}[e^{ik\Theta}] = e^{ik\mu-\gamma|k|}=c^{|k|}e^{ik\mu}\), which characterizes the wrapped Cauchy.
The function `sample_wrapcauchy_numpy` above implements this using the Cauchy inverse CDF via a tangent transform.
# Simulation sanity check: histogram (linear) and circular mean direction
c, mu = 0.85, 0.4
n = 20_000
s = sample_wrapcauchy_numpy(n, c=c, mu=mu, rng=rng)
print("sample mean direction:", circular_mean_direction(s))
print("sample R:", mean_resultant_length(s), "(theory", c, ")")
# Linear histogram on [0, 2π)
hist, edges = np.histogram(s, bins=80, range=(0.0, TAU), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fig = go.Figure()
fig.add_trace(go.Bar(x=centers, y=hist, width=edges[1]-edges[0], name="MC histogram"))
fig.add_trace(
go.Scatter(x=centers, y=wrapcauchy_pdf(centers, c=c, mu=mu), mode="lines", name="PDF")
)
fig.update_layout(
title="Monte Carlo histogram vs PDF (linear cut at 0)",
xaxis_title="θ (radians)",
yaxis_title="density",
width=900,
height=380,
)
fig.show()
sample mean direction: 0.3966025177674441
sample R: 0.8497815832008019 (theory 0.85 )
8) Visualization#
We’ll visualize:
- PDF on \([0,2\pi)\)
- CDF on \([0,2\pi)\)
- Monte Carlo samples on the unit circle (a natural visualization for angles)
c, mu = 0.8, 0.7
theta = np.linspace(0.0, TAU, 2000, endpoint=False)
f = wrapcauchy_pdf(theta, c=c, mu=mu)
F = wrapcauchy_cdf(theta, c=c, mu=mu)
s = sample_wrapcauchy_numpy(8_000, c=c, mu=mu, rng=rng)
fig = make_subplots(
rows=1,
cols=3,
subplot_titles=("PDF", "CDF", "Samples on unit circle"),
specs=[[{}, {}, {"type": "polar"}]],
)
fig.add_trace(go.Scatter(x=theta, y=f, mode="lines", name="pdf"), row=1, col=1)
fig.add_trace(go.Scatter(x=theta, y=F, mode="lines", name="cdf"), row=1, col=2)
# Polar histogram of samples
bins = 48
hist, edges = np.histogram(s, bins=bins, range=(0.0, TAU), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
fig.add_trace(
go.Barpolar(theta=centers, r=hist, width=(TAU / bins) * 0.98, name="samples"),
row=1,
col=3,
)
fig.update_xaxes(title_text="θ", row=1, col=1)
fig.update_xaxes(title_text="θ", row=1, col=2)
fig.update_yaxes(title_text="f(θ)", row=1, col=1)
fig.update_yaxes(title_text="F(θ)", row=1, col=2)
fig.update_layout(width=1150, height=380, showlegend=False)
fig.show()
9) SciPy integration (scipy.stats.wrapcauchy)#
SciPy provides:
- `wrapcauchy.pdf(x, c)` / `logpdf`
- `wrapcauchy.cdf(x, c)` / `ppf`
- `wrapcauchy.rvs(c, size=..., random_state=...)`
- `wrapcauchy.fit(data, ...)` for MLE-style fitting
Important practical note:
SciPy treats `wrapcauchy` as a distribution on an interval of length \(2\pi\). To use a mean direction \(\mu\) in a circular sense, it is often easiest to work with wrapped differences \(\delta=(\theta-\mu)\bmod 2\pi\).
# Basic SciPy usage on the canonical [0, 2π) cut
c = 0.65
x = np.linspace(0.0, TAU, 8, endpoint=False)
print("pdf:", wrapcauchy.pdf(x, c))
print("cdf:", wrapcauchy.cdf(x, c))
# SciPy sampling (on [0, 2π))
s_scipy = wrapcauchy.rvs(c, size=5_000, random_state=7)
print("sample mean direction (SciPy samples):", circular_mean_direction(s_scipy))
# Fitting example: recover c and a circular location
#
# SciPy's (loc, scale) define an *interval* [loc, loc + 2π*scale). For circular work,
# it helps to "re-cut" angles so they're represented inside that interval.
def recut_to_interval(theta: np.ndarray, loc: float) -> np.ndarray:
theta = wrap_angle(theta)
return np.where(theta < loc, theta + TAU, theta)
c_true, mu_true = 0.8, 1.3
n = 3_000
s = sample_wrapcauchy_numpy(n, c=c_true, mu=mu_true, rng=rng)
# Choose a cut near the data's mean direction, then fit on that interval.
mu0 = circular_mean_direction(s)
s_recut = recut_to_interval(s, loc=mu0)
# fit returns (c, loc, scale). We'll fix scale=1 for circular work.
c_hat, loc_hat, scale_hat = wrapcauchy.fit(s_recut, fscale=1)
print()
print("true (c, μ):", c_true, mu_true)
print("fit c:", c_hat)
print("fit loc (interval cut):", loc_hat, "scale:", scale_hat)
print("fit μ (wrapped loc):", float(wrap_angle(loc_hat)))
# Compare: our circular MLE estimates (c, μ)
c_hat2, mu_hat2 = mle_wrapcauchy(s)
print("mle (c, μ):", c_hat2, mu_hat2)
pdf: [0.7503 0.1826 0.0646 0.0392 0.0338 0.0392 0.0646 0.1826]
cdf: [0. 0.3493 0.4335 0.4721 0.5 0.5279 0.5665 0.6507]
sample mean direction (SciPy samples): 0.009233644109653879
true (c, μ): 0.8 1.3
fit c: 0.8038418546610053
fit loc (interval cut): 1.3055669821339948 scale: 1
fit μ (wrapped loc): 1.3055669821339948
mle (c, μ): 0.8038775620591454 1.3051673737537706
10) Statistical use cases#
Hypothesis testing#
Example: test whether the mean direction equals a specified \(\mu_0\).
A simple approach is a likelihood ratio test comparing:
- \(H_0\): \(\mu=\mu_0\) (optimize only \(c\))
- \(H_1\): \(\mu\) free (optimize \(c\) and \(\mu\))
Bayesian modeling#
For Bayesian inference, `wrapcauchy` is a convenient likelihood for angles when you want to allow occasional large deviations (outliers).
A simple, fully self-contained demo: compute a grid posterior over \((\mu,c)\) with a uniform prior on \(\mu\) and a Beta prior on \(c\).
Generative modeling#
As a noise model in a generative process:

$$
\theta_t=(\mu_t+\varepsilon_t)\bmod 2\pi,\qquad \varepsilon_t\sim\mathrm{WrappedCauchy}(0,c).
$$
# Hypothesis test: H0 μ = μ0 vs H1 μ free (LR test)
# Synthetic data
c_true, mu_true = 0.75, 0.9
n = 800
x = sample_wrapcauchy_numpy(n, c=c_true, mu=mu_true, rng=rng)
mu0 = 0.0 # null mean direction
# MLE under H1
c_hat, mu_hat = mle_wrapcauchy(x)
ll1 = wrapcauchy_loglik(x, c=c_hat, mu=mu_hat)
# MLE under H0: optimize only c with μ fixed
def nll_c(logit_c: float) -> float:
c = 1.0 / (1.0 + np.exp(-logit_c))
c = np.clip(c, 1e-12, 1.0 - 1e-12)
return -wrapcauchy_loglik(x, c=c, mu=mu0)
res0 = optimize.minimize_scalar(nll_c, bounds=(-10, 10), method="bounded")
c0 = 1.0 / (1.0 + np.exp(-res0.x))
ll0 = wrapcauchy_loglik(x, c=c0, mu=mu0)
lr = 2.0 * (ll1 - ll0)
p = 1.0 - chi2.cdf(lr, df=1)
print("H0 μ=", mu0)
print("H1 MLE (c, μ)=", c_hat, mu_hat)
print("LR statistic=", lr)
print("approx p-value (χ² df=1)=", p)
H0 μ= 0.0
H1 MLE (c, μ)= 0.7425742951471538 0.9118284272710974
LR statistic= 1025.1174573307458
approx p-value (χ² df=1)= 0.0
# Bayesian demo: grid posterior for (μ, c)
# Data
c_true, mu_true = 0.8, 1.4
n = 300
x = sample_wrapcauchy_numpy(n, c=c_true, mu=mu_true, rng=rng)
mu_grid = np.linspace(0.0, TAU, 180, endpoint=False)
c_grid = np.linspace(0.02, 0.98, 120)
# Priors: μ ~ Uniform(0,2π), c ~ Beta(a,b) on (0,1)
a, b = 2.0, 2.0
log_prior_c = (a - 1.0) * np.log(c_grid) + (b - 1.0) * np.log1p(-c_grid)
# Log-likelihood on grid
log_post = np.empty((len(c_grid), len(mu_grid)))
for i, c in enumerate(c_grid):
for j, mu in enumerate(mu_grid):
log_post[i, j] = wrapcauchy_loglik(x, c=c, mu=mu) + log_prior_c[i]
# Normalize stably
log_post -= np.max(log_post)
post = np.exp(log_post)
post /= np.sum(post)
# Posterior means (circular for μ)
mu_mean = np.angle(np.sum(post * np.exp(1j * mu_grid)[None, :]))
mu_mean = wrap_angle(mu_mean)
c_mean = float(np.sum(post * c_grid[:, None]))
print("posterior mean μ:", mu_mean)
print("posterior mean c:", c_mean)
fig = px.imshow(
post,
x=mu_grid,
y=c_grid,
aspect="auto",
labels={"x": "μ", "y": "c", "color": "posterior"},
title="Grid posterior p(μ,c | data)",
)
fig.update_layout(width=900, height=420)
fig.show()
posterior mean μ: 1.4254246315029266
posterior mean c: 0.8075766191924449
# Generative modeling: add wrapped-Cauchy noise to a true direction
# Helper: map angles to [-π, π) for signed-noise intuition
def wrap_to_pi(theta: np.ndarray | float) -> np.ndarray:
return (np.asarray(theta, dtype=float) + np.pi) % TAU - np.pi
T = 200
true_mu = wrap_angle(np.linspace(0, 1.5 * np.pi, T))
# Noise centered at 0 on the circle
noise_angle = sample_wrapcauchy_numpy(T, c=0.85, mu=0.0, rng=rng)
noise = wrap_to_pi(noise_angle)
obs = wrap_angle(true_mu + noise)
# Unwrap only for visualization (removes the artificial jumps from the [0,2π) cut)
true_unwrapped = np.unwrap(true_mu)
obs_unwrapped = np.unwrap(obs)
fig = go.Figure()
fig.add_trace(go.Scatter(y=true_unwrapped, mode="lines", name="true μ(t)"))
fig.add_trace(go.Scatter(y=obs_unwrapped, mode="markers", name="observed θ(t)", opacity=0.6))
fig.update_layout(
title="True direction with wrapped-Cauchy observation noise (unwrapped view)",
xaxis_title="time index",
yaxis_title="angle (radians)",
width=900,
height=360,
)
fig.show()
11) Pitfalls#
- Angles are circular: the linear mean on \([0,2\pi)\) can be meaningless (SciPy's `stats` returns a constant mean \(\pi\) for the default cut).
- Units: the formulas here assume radians. If you have degrees, convert with `np.deg2rad`.
- Parameter constraints: require \(0<c<1\). Values extremely close to \(1\) produce a very sharp peak.
- Numerical stability: for \(c\approx 1\) and \(\theta\approx\mu\), use `logpdf` to avoid under/overflow.
- CDF branch cut: closed forms use \(\tan(\theta/2)\); handle the split at \(\pi\) carefully.
- SciPy `loc`/`scale`: `wrapcauchy` is implemented as an interval distribution; for circular shifting, wrap your angles (or re-cut the interval consistently).
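For the degrees pitfall, a minimal convert-compute-convert-back sketch (the headings below are hypothetical data):

```python
import numpy as np

# Hypothetical compass headings in degrees, clustered around north (0°/360°)
headings_deg = np.array([350.0, 5.0, 12.0, 355.0, 8.0, 2.0])

theta = np.deg2rad(headings_deg) % (2 * np.pi)  # radians on [0, 2π)

# Circular mean direction, reported back in degrees
z = np.mean(np.exp(1j * theta))
mean_deg = float(np.rad2deg(np.angle(z)) % 360.0)
print(mean_deg)                 # ≈ 2°: sensible
print(np.mean(headings_deg))    # 122°: the misleading linear mean
```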
12) Summary#
- `wrapcauchy` is a circular distribution with density

  $$
  f(\theta)=\frac{1-c^2}{2\pi\bigl(1+c^2-2c\cos(\theta-\mu)\bigr)}.
  $$

- It can be generated by wrapping a Cauchy: \(\Theta=(\mu+Y)\bmod 2\pi\) with \(Y\sim\mathrm{Cauchy}(0,-\log c)\).
- The key analytic property is the trigonometric moment formula

  $$
  \mathbb{E}\bigl[e^{ik\Theta}\bigr]=c^{|k|}e^{ik\mu}.
  $$

- For circular data, prefer mean direction and resultant length over linear moments.
- `scipy.stats.wrapcauchy` supports evaluation, simulation, and MLE-style fitting on \([0,2\pi)\).